Online federated incremental learning algorithm for blockchain
LUO Changyin, CHEN Xuebin, MA Chundi, WANG Junyu
Journal of Computer Applications    2021, 41 (2): 363-371.   DOI: 10.11772/j.issn.1001-9081.2020050609
Abstract (696)      PDF (2197KB) (985)
Because traditional data processing technology generalizes poorly and fails to account for multi-source data security, a blockchain-oriented online federated incremental learning algorithm was proposed. Ensemble learning and incremental learning were applied within a federated learning framework: the stacking ensemble algorithm was used to integrate the local models, and the model parameters from the training phase were uploaded to the blockchain and synchronized quickly. As a result, the accuracy of the constructed global model fell by only 1%, while security in both the training and storage stages was improved, the costs of data storage and model-parameter transmission were reduced, and the risk of data leakage caused by model gradient updates was lowered. Experimental results show that the accuracy of the model exceeds 91.5% and its variance is below 10^-5; compared with a traditional centralized training model, the proposed model has slightly reduced accuracy but improves the security of both the data and the model.
Reference | Related Articles | Metrics
Classification algorithm based on undersampling and cost-sensitiveness for unbalanced data
WANG Junhong, YAN Jiarong
Journal of Computer Applications    2021, 41 (1): 48-52.   DOI: 10.11772/j.issn.1001-9081.2020060878
Abstract (383)      PDF (752KB) (663)
Focusing on the low prediction accuracy of traditional classifiers for the minority class in unbalanced datasets, an unbalanced data classification algorithm based on undersampling and cost sensitivity, called USCBoost (UnderSamples and Cost-sensitive Boosting), was proposed. Firstly, before the base classifier was trained in each AdaBoost (Adaptive Boosting) iteration, the majority-class samples were sorted in descending order of weight, majority-class samples equal in number to the minority-class samples were selected according to sample weights, the weights of the sampled majority-class samples were normalized, and a temporary training set was formed from these majority-class samples together with the minority-class samples to train the base classifier. Secondly, in the weight-update stage, a higher misclassification cost was assigned to the minority class, so that the weights of minority-class samples increased faster and those of majority-class samples increased more slowly. On ten UCI datasets, USCBoost was compared with AdaBoost, AdaCost (Cost-sensitive AdaBoosting), and RUSBoost (Random Under-Sampling Boosting). Experimental results show that USCBoost achieves the highest scores on six and nine of the datasets under the F1-measure and G-mean criteria respectively. The proposed algorithm thus has better classification performance on unbalanced data.
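The per-iteration undersampling step described above can be sketched in plain Python. This is a hypothetical illustration, not the authors' code: the function name `undersample_by_weight`, the list-based data layout and the exact normalization detail are assumptions.

```python
def undersample_by_weight(samples, labels, weights, majority_label=0):
    """Keep all minority samples plus the highest-weight majority
    samples, matching the minority count (USCBoost-style step)."""
    maj = [i for i, c in enumerate(labels) if c == majority_label]
    mino = [i for i, c in enumerate(labels) if c != majority_label]
    maj.sort(key=lambda i: weights[i], reverse=True)  # largest weight first
    keep = maj[:len(mino)]
    total = sum(weights[i] for i in keep)
    new_w = {i: weights[i] / total for i in keep}  # renormalize sampled majority
    new_w.update({i: weights[i] for i in mino})
    idx = keep + mino
    return ([samples[i] for i in idx],
            [labels[i] for i in idx],
            [new_w[i] for i in idx])

X = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
y = [0, 0, 0, 0, 1, 1]
w = [0.10, 0.30, 0.05, 0.15, 0.20, 0.20]
Xs, ys, ws = undersample_by_weight(X, y, w)
# the temporary training set is balanced: two majority, two minority samples
```

A base classifier trained on `(Xs, ys, ws)` then sees a balanced class distribution while the hardest (highest-weight) majority samples are retained.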
Reference | Related Articles | Metrics
3D point cloud classification and segmentation network based on Spider convolution
WANG Benjie, NONG Liping, ZHANG Wenhui, LIN Jiming, WANG Junyi
Journal of Computer Applications    2020, 40 (6): 1607-1612.   DOI: 10.11772/j.issn.1001-9081.2019101879
Abstract (591)      PDF (689KB) (854)

The traditional Convolutional Neural Network (CNN) cannot process point cloud data directly; the point cloud must first be converted into a multi-view representation or voxelized grid, which complicates the pipeline and lowers point cloud recognition accuracy. To address this problem, a new point cloud classification and segmentation network called Linked-Spider CNN was proposed. Firstly, deep features of the point cloud were extracted by adding more Spider convolution layers on top of Spider CNN. Secondly, by introducing the idea of residual networks, shortcut links were added to every Spider convolution layer to form residual blocks. Thirdly, the output features of each residual block were concatenated and fused to form the point cloud features. Finally, the point cloud features were classified by three fully connected layers or segmented by multiple convolution layers. The proposed network was compared with networks such as PointNet, PointNet++ and Spider CNN on the ModelNet40 and ShapeNet Parts datasets. The experimental results show that the proposed network improves the classification accuracy and segmentation effect of point clouds, with faster convergence and stronger robustness.

Reference | Related Articles | Metrics
Agricultural greenhouse temperature prediction method based on improved deep belief network
ZHOU Xiangyu, CHENG Yong, WANG Jun
Journal of Computer Applications    2019, 39 (4): 1053-1058.   DOI: 10.11772/j.issn.1001-9081.2018091876
Abstract (425)      PDF (890KB) (306)
Concerning the weak representation ability and long learning time for the complex and variable environmental factors in greenhouses, a prediction method was proposed based on an improved Deep Belief Network (DBN) combined with Empirical Mode Decomposition (EMD) and a Gated Recurrent Unit (GRU). Firstly, the temperature environment factor was decomposed by EMD, and the resulting intrinsic mode functions and residual signal were predicted at different scales. Secondly, glial chains were introduced to improve the DBN, and the decomposed signal, combined with illumination and carbon dioxide, was used for multi-attribute feature extraction. Finally, the signal components predicted by the GRU were summed to obtain the final prediction result. The simulation results show that, compared with the empirical-decomposition belief network (EMD-DBN) and the DBN with glial chains (DBN-g), the prediction error of the proposed method is reduced by 6.25% and 5.36% respectively, verifying its effectiveness and feasibility for greenhouse time-series prediction under strong noise and coupling.
Reference | Related Articles | Metrics
Mixed density peaks clustering algorithm
WANG Jun, ZHOU Kai, CHENG Yong
Journal of Computer Applications    2019, 39 (2): 403-408.   DOI: 10.11772/j.issn.1001-9081.2018061373
Abstract (542)      PDF (842KB) (361)
As a new density-based clustering algorithm, clustering by fast search and find of Density Peaks (DP) regards each density peak as a potential clustering center when dealing with a single cluster containing multiple density peaks, which makes it difficult to determine the correct number of clusters in the dataset. To solve this problem, a mixed density peaks clustering algorithm named C-DP was proposed. Firstly, the density peak points were taken as the initial clustering centers and the dataset was divided into sub-clusters. Then, borrowing from the Clustering Using REpresentatives (CURE) algorithm, scattered representative points were selected from the sub-clusters, the clusters containing the closest pair of representative points were merged, and a contraction-factor parameter was introduced to control the shape of the clusters. The experimental results show that the C-DP algorithm achieves a better clustering effect than the DP algorithm on four synthetic datasets. Comparison of the Rand Index on real datasets shows that on the S1 and 4k2_far datasets, the performance of C-DP is 2.32% and 1.13% higher than that of DP. The C-DP algorithm thus improves clustering accuracy when datasets contain multiple density peaks within a single cluster.
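The CURE-style representative handling can be illustrated with a small sketch. The function names and the two-point example are hypothetical, and the real C-DP merge loop involves more bookkeeping; this only shows the contraction factor and the closest-representative-pair distance.

```python
def shrink_representatives(points, alpha):
    """Move each scattered representative toward the cluster centroid
    by contraction factor alpha (0 = no shrink, 1 = collapse to centroid)."""
    d = len(points[0])
    centroid = [sum(p[k] for p in points) / len(points) for k in range(d)]
    return [[p[k] + alpha * (centroid[k] - p[k]) for k in range(d)]
            for p in points]

def rep_distance(reps_a, reps_b):
    """Distance between two sub-clusters: their closest representative pair."""
    return min(
        sum((a[k] - b[k]) ** 2 for k in range(len(a))) ** 0.5
        for a in reps_a for b in reps_b)

reps = shrink_representatives([[0.0, 0.0], [2.0, 0.0]], alpha=0.5)
# centroid is (1, 0); both representatives move halfway toward it
```

Merging always the pair of sub-clusters with the smallest `rep_distance` is what lets several density peaks end up inside one final cluster.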
Reference | Related Articles | Metrics
Multi-sensor fault diagnosis method for quad-rotor aircraft based on adaptive observer
WANG Rijun, BAI Yue, ZENG Zhiqiang, DUAN Nengquan, DANG Changying, DU Wenhua, WANG Junyuan
Journal of Computer Applications    2018, 38 (9): 2735-2741.   DOI: 10.11772/j.issn.1001-9081.2018030561
Abstract (665)      PDF (1003KB) (396)
In order to detect and diagnose multi-sensor faults of a quad-rotor aircraft, a multi-sensor fault diagnosis method based on an adaptive observer was proposed. Firstly, after establishing the aircraft dynamics model and sensor model, the sensor fault was treated as a virtual actuator fault, and a multi-sensor fault detection and diagnosis system for the quad-rotor aircraft was constructed. Secondly, a nonlinear fault observer was designed to realize multi-sensor fault detection and isolation, and a nonlinear adaptive observer was designed based on the Lyapunov method to estimate the multiple fault biases. Finally, the stability and parameter convergence of the adaptive laws were proved in the presence of sensor measurement noise. The experimental results show that the method can detect and isolate the faults of multiple sensors effectively, and can estimate and track multiple sensor fault biases simultaneously.
Reference | Related Articles | Metrics
Fast online distributed dual average optimization algorithm
LI Dequan, WANG Junya, MA Chi, ZHOU Yuejin
Journal of Computer Applications    2018, 38 (8): 2337-2342.   DOI: 10.11772/j.issn.1001-9081.2018010189
Abstract (1337)      PDF (814KB) (381)
To improve the convergence speed of distributed online optimization algorithms, a fast first-order Online Distributed Dual Average optimization (FODD) algorithm was proposed, which sequentially adds edges to the underlying network topology. Firstly, for the problem of choosing edges that make the network model mix quickly in online distributed optimization, a mathematical model of edge addition was established and solved by FODD. Secondly, the relationship between the designed network topology and the convergence rate of the online distributed dual averaging algorithm was revealed, showing that improving the algebraic connectivity of the underlying topology also greatly improves the regret bound. The Online Distributed Dual Average (ODDA) algorithm was then extended from static networks to time-varying networks. Meanwhile, the proposed FODD algorithm was proved to be convergent and its convergence rate was specified. Finally, numerical simulations show that, compared with existing algorithms such as ODDA, the proposed FODD algorithm has better convergence performance.
Reference | Related Articles | Metrics
LASSO based image reversible watermarking
ZHENG Hongchang, WANG Chuntao, WANG Junxiang
Journal of Computer Applications    2018, 38 (8): 2287-2292.   DOI: 10.11772/j.issn.1001-9081.2018020471
Abstract (380)      PDF (1044KB) (202)
For Difference Expansion-Histogram Shifting (DE-HS) based reversible watermarking, improving the prediction accuracy decreases the prediction errors, yielding higher embedding capacity at the same embedding distortion. To predict image pixels more accurately, a LASSO (Least Absolute Shrinkage and Selection Operator) based local predictor was proposed. Specifically, taking into account the edges and textures present in natural images, the image pixel prediction problem was formulated as a LASSO optimization problem; the prediction coefficients were obtained by solving this problem, and prediction errors were generated accordingly. By applying DE-HS to the resulting prediction errors, a LASSO-based reversible watermarking scheme was designed. The experimental results show that, compared with a least-squares-based predictor, the proposed scheme achieves a higher Peak Signal-to-Noise Ratio (PSNR) when embedding the same amount of data.
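The LASSO formulation can be sketched with plain coordinate descent and soft thresholding. This is illustrative only: the two "neighbor" features and the penalty value are made up, and the paper's actual context window and solver are not specified here.

```python
def soft_threshold(z, t):
    return z - t if z > t else (z + t if z < -t else 0.0)

def lasso_cd(X, y, alpha, iters=200):
    """Coordinate descent for min_b (1/2n)*||y - Xb||^2 + alpha*||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, alpha) / zj if zj else 0.0
    return b

# toy data: the pixel equals twice its first neighbor; the second is irrelevant
X = [[1, 1], [2, 1], [3, 0], [4, 0]]
y = [2, 4, 6, 8]
b = lasso_cd(X, y, alpha=0.1)
# b[0] lands close to 2; the L1 penalty drives b[1] to exactly 0
```

The sparsity is the point: in textured regions only the informative neighbors keep nonzero coefficients, which is what sharpens the prediction.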
Reference | Related Articles | Metrics
Improved hybrid recommendation algorithm based on stacked denoising autoencoder
YANG Shuai, WANG Juan
Journal of Computer Applications    2018, 38 (7): 1866-1871.   DOI: 10.11772/j.issn.1001-9081.2017123060
Abstract (758)      PDF (941KB) (429)
Traditional collaborative filtering only uses users' ratings on items when generating recommendations, without considering users' labels or comments; this cannot reflect users' real preferences for different items, and the prediction accuracy is low and prone to overfitting. To address this, a Stacked Denoising AutoEncoder (SDAE)-based improved Hybrid Recommendation (SDHR) algorithm was proposed. Firstly, SDAE was used to extract items' explicit features from users' free-text labels. Then, the Latent Factor Model (LFM) algorithm was improved by replacing LFM's abstract item features with the extracted explicit ones to train the matrix factorization model. Finally, the user-item preference matrix was used to generate recommendations. Experiments on the MovieLens dataset showed that the accuracy of the proposed algorithm was improved by 38.4%, 16.1% and 45.2% respectively compared with three recommendation models: one based on label weights with collaborative filtering, one based on SDAE and extreme learning machines, and one based on recurrent neural networks. The experimental results show that the proposed algorithm can make full use of items' free-text label information to improve recommendation performance.
Reference | Related Articles | Metrics
Video region detection algorithm for virtual desktop protocol
HOU Wenhui, WANG Junfeng
Journal of Computer Applications    2018, 38 (5): 1463-1469.   DOI: 10.11772/j.issn.1001-9081.2017102610
Abstract (503)      PDF (1194KB) (376)
At present, video played over a virtual desktop protocol with a partitioning mechanism suffers from problems such as stuttering playback and high bandwidth usage. In this paper, a Video Region Detection Algorithm (VRDA) was proposed for the virtual desktop protocol SPICE (Simple Protocol for Independent Computing Environment). Video regions were detected while video was played over the protocol; each region was captured as a complete video frame and encoded with the MPEG4 (Moving Pictures Experts Group-4) video compression algorithm instead of the less efficient original MJPEG (Motion JPEG) algorithm. An evaluation metric named DAETD (Difference between Actual and Expected Display Time) was proposed to test the fluency of the improved SPICE, and the bandwidth consumption of SPICE was also measured. The experimental results show that the proposed algorithm improves video fluency and reduces network bandwidth consumption.
Reference | Related Articles | Metrics
Ship course identification model based on recursive least squares algorithm with dynamic forgetting factor
SUN Gongwu, XIE Jirong, WANG Junxuan
Journal of Computer Applications    2018, 38 (3): 900-904.   DOI: 10.11772/j.issn.1001-9081.2017082041
Abstract (646)      PDF (768KB) (412)
To improve the speed and robustness of the Recursive Least Squares (RLS) algorithm with forgetting factor in the parameter identification of the ship course motion mathematical model, an RLS algorithm with a dynamic forgetting factor based on fuzzy control was proposed. Firstly, the residual between the theoretical model output and the actual model output was calculated. Secondly, an evaluation function was constructed on the basis of the residual to assess the parameter identification error. Then, a fuzzy controller taking the evaluation function and its change rate as its two inputs was adopted to realize dynamic adjustment of the forgetting factor. Finally, combined with the designed fuzzy control rule table, the fuzzy controller produced the modification of the forgetting factor. Simulation results show that the presented algorithm adjusts the forgetting factor according to the parameter identification error, achieving higher precision and faster parameter identification than the RLS algorithm with a constant forgetting factor.
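The underlying forgetting-factor RLS update can be sketched as follows. A fixed λ and a made-up two-parameter model are used here for brevity; the paper's contribution is precisely that λ is adjusted dynamically by a fuzzy controller instead of being held constant.

```python
import math

def rls_step(theta, P, x, y, lam):
    """One RLS update with forgetting factor lam:
    K = P x / (lam + x'Px); theta += K*(y - x'theta); P = (P - K x'P)/lam."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    K = [v / denom for v in Px]                      # gain vector
    err = y - sum(x[i] * theta[i] for i in range(n)) # prediction residual
    theta = [theta[i] + K[i] * err for i in range(n)]
    xP = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
    P = [[(P[i][j] - K[i] * xP[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# identify y = 1.5*u1 - 0.5*u2 from noise-free samples
theta = [0.0, 0.0]
P = [[100.0, 0.0], [0.0, 100.0]]
for k in range(50):
    x = [math.sin(0.1 * k), math.cos(0.1 * k)]
    y = 1.5 * x[0] - 0.5 * x[1]
    theta, P = rls_step(theta, P, x, y, lam=0.98)
# theta converges toward the true parameters [1.5, -0.5]
```

A smaller λ forgets old data faster (quick tracking, noisier estimates); a larger λ does the opposite. That trade-off is what the fuzzy controller tunes online from the identification residual.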
Reference | Related Articles | Metrics
Distributed quantized subgradient optimization algorithm for multi-agent switched networks
LI Jiadi, MA Chi, LI Dequan, WANG Junya
Journal of Computer Applications    2018, 38 (2): 509-515.   DOI: 10.11772/j.issn.1001-9081.2017081927
Abstract (416)      PDF (948KB) (321)
Existing distributed subgradient optimization algorithms mostly rest on ideal assumptions: the network topology is balanced, and agents exchange the exact values of their state variables. To relax these assumptions, a distributed subgradient optimization algorithm for switched networks based on limited quantized information communication was proposed. All information along each dynamic edge was quantized by a uniform quantizer with a limited number of quantization levels before being sent over an unbalanced switching network; the convergence of the multi-agent distributed quantized subgradient optimization algorithm was then proved using a non-quadratic Lyapunov function method. Finally, simulation examples were given to demonstrate the effectiveness of the proposed algorithm. The simulation results show that, under the same bandwidth, the convergence rate of the proposed algorithm can be improved by adjusting the parameters of the quantizer. By weakening the assumptions on the adjacency matrix and the network bandwidth requirement, the proposed algorithm is more suitable for practical applications.
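The limited-level uniform quantizer at the heart of such schemes can be sketched generically (the clipping range and level count below are assumptions, not the paper's parameters):

```python
def uniform_quantize(v, lo, hi, levels):
    """Map v to one of `levels` evenly spaced values on [lo, hi];
    out-of-range inputs are clipped (finite quantization range)."""
    v = min(max(v, lo), hi)
    step = (hi - lo) / (levels - 1)
    return lo + round((v - lo) / step) * step

# with 5 levels on [0, 1] the representable values are 0, .25, .5, .75, 1
q = uniform_quantize(0.26, 0.0, 1.0, 5)
# quantization error within range is bounded by step/2 = 0.125
```

Fewer levels mean fewer bits per transmitted value (lower bandwidth) but larger quantization error, which is exactly the trade-off the convergence analysis has to absorb.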
Reference | Related Articles | Metrics
HK extended model with tunable degree correlation and clustering coefficient
ZHOU Yujiang, WANG Juan
Journal of Computer Applications    2018, 38 (10): 2971-2975.   DOI: 10.11772/j.issn.1001-9081.2018030592
Abstract (481)      PDF (736KB) (293)
Concerning the problem that most existing social network growth models have negative degree correlation, and considering the positive degree correlations and high clustering coefficients characteristic of real social networks, a new social network growth model was proposed based on the Holme and Kim (HK) model. Firstly, the topological structure of a real-world social network was analyzed to obtain important topological parameters of real social networks. Secondly, the HK model was improved by introducing a triad formation mechanism, yielding the HK extended model with Tunable Degree Correlation and Clustering coefficient (HK-TDC&C), in which both the clustering coefficient and the degree correlation of the network can be adjusted. The model can be used to construct social networks with various topological properties. Finally, the degree distribution of the model was analyzed using mean-field theory, and Matlab was used for numerical simulation to calculate other topological parameters of the network. The results show that, by tuning the preferential attachment parameters and connection probabilities, the social network constructed by the HK-TDC&C model satisfies the basic characteristics of social networks, including the scale-free property, the small-world property, a high clustering coefficient and positive degree correlation, and its topology is closer to that of real social networks.
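The triad-formation mechanism reads naturally as code. The sketch below is a generic Holme-Kim-style generator, not the HK-TDC&C model itself (which adds further tunable knobs); the fallback-to-preferential-attachment detail and parameter names are assumptions.

```python
import random

def hk_network(n, m, p, seed=1):
    """Grow a Holme-Kim style network: each new node makes m links,
    preferring high-degree targets, and with probability p closes a
    triangle through a neighbor of the previous target."""
    random.seed(seed)
    # start from a complete graph on m+1 nodes
    adj = {i: {j for j in range(m + 1) if j != i} for i in range(m + 1)}
    deg_list = [i for i in range(m + 1) for _ in range(m)]  # node repeated per degree
    for v in range(m + 1, n):
        adj[v] = set()
        targets, last = set(), None
        while len(targets) < m:
            cand = None
            if last is not None and random.random() < p:
                nbrs = [u for u in adj[last] if u != v and u not in targets]
                if nbrs:  # triad formation: raises the clustering coefficient
                    cand = random.choice(nbrs)
            if cand is None:
                cand = random.choice(deg_list)  # preferential attachment
                if cand == v or cand in targets:
                    continue
            targets.add(cand)
            last = cand
        for u in targets:
            adj[v].add(u)
            adj[u].add(v)
            deg_list += [u, v]
    return adj

g = hk_network(200, 3, 0.6)
# every node ends with degree >= m; raising p yields more triangles
```

Because a triad step links to a neighbor of an already-chosen (typically high-degree) node, raising `p` increases both the clustering coefficient and the tendency of similar-degree nodes to connect.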
Reference | Related Articles | Metrics
General bound estimation method for pattern measures over uncertain datasets
WANG Ju, LIU Fuxian, JIN Chunjie
Journal of Computer Applications    2018, 38 (1): 165-170.   DOI: 10.11772/j.issn.1001-9081.2017061582
Abstract (490)      PDF (906KB) (276)
Concerning the problem of bound estimation for pattern measures in constraint-based pattern mining, a general bound estimation method for pattern measures over uncertain datasets was proposed. According to the characteristics of uncertain transaction datasets with weights, firstly, a general estimation framework for common pattern measures was designed. Secondly, a fast estimation method for the upper bound of pattern measures under the designed framework was provided. Finally, two commonly used pattern measures were introduced to verify the feasibility of the proposed method. In the experiments, the runtime and memory usage of the Potential High-Utility Itemsets UPper-bound-based mining (PHUI-UP) algorithm were compared under the transaction-weighted utilization bound, the proposed upper bound and the actual upper bound. The experimental results show that the proposed method uses less memory and runtime to estimate the upper bound of pattern utility.
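One classic bound of this kind is the transaction-weighted utilization used as a baseline by PHUI-UP. As a sketch (the dictionary-based transaction layout and function names are assumptions):

```python
def utility(db, pattern):
    """Exact utility: sum of the pattern's item utilities over the
    transactions that contain every item of the pattern."""
    p = set(pattern)
    return sum(sum(t[i] for i in p) for t in db if p <= t.keys())

def twu_upper_bound(db, pattern):
    """Transaction-Weighted Utilization: the full utility of every
    transaction containing the pattern -- a cheap upper bound."""
    p = set(pattern)
    return sum(sum(t.values()) for t in db if p <= t.keys())

# each transaction maps item -> utility contribution
db = [{'a': 3, 'b': 2}, {'a': 1, 'c': 4}, {'b': 5}]
u = utility(db, ['a'])           # 3 + 1 = 4
ub = twu_upper_bound(db, ['a'])  # (3+2) + (1+4) = 10
```

Because the bound always dominates the measure, any pattern whose bound falls below the threshold can be pruned without computing its exact measure, which is where the runtime and memory savings come from.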
Reference | Related Articles | Metrics
Prediction of rainfall based on improved Adaboost-BP model
WANG Jun, FEI Kai, CHENG Yong
Journal of Computer Applications    2017, 37 (9): 2689-2693.   DOI: 10.11772/j.issn.1001-9081.2017.09.2689
Abstract (562)      PDF (833KB) (412)
Aiming at the low generalization ability and insufficient precision of current classification algorithms, a combined classification model integrating the Adaboost algorithm and the Back-Propagation (BP) neural network was proposed. Multiple neural network weak classifiers were constructed, weighted, and linearly combined into a strong classifier. The improved Adaboost algorithm optimized the normalization factor: the sample-weight update strategy was adjusted during boosting to minimize the normalization factor, ensuring that increasing the number of weak classifiers also reduced the upper bound of the error estimate, thereby improving the generalization ability and classification accuracy of the final integrated strong classifier. Daily precipitation data from 6 sites in Jiangsu province were selected as the experimental data, and 7 precipitation grades were established. Among the many factors influencing rainfall, 12 attributes strongly correlated with precipitation were selected as forecasting factors. The results show that the improved Adaboost-BP combined model performs better, especially for site 58259, with an overall classification accuracy of 81%. Among the 7 grades, the prediction accuracy of grade-0 (no rain) is the best, and the accuracy of the other rainfall grades is also improved. The theoretical derivation and experimental results both show that the improvement raises the prediction accuracy.
Reference | Related Articles | Metrics
Interval-value attribute reduction algorithm for meteorological observation data based on genetic algorithm
ZHENG Zhongren, CHENG Yong, WANG Jun, ZHONG Shuiming, XU Liya
Journal of Computer Applications    2017, 37 (9): 2678-2683.   DOI: 10.11772/j.issn.1001-9081.2017.09.2678
Abstract (501)      PDF (1007KB) (471)
Aiming at the problems that meteorological observation data are collected with weak purposefulness, the data are highly redundant, the observation intervals contain a large number of single values, and the precision of equivalence partitioning is low, an attribute reduction algorithm for Meteorological Observation Interval-valued data based on a Genetic Algorithm (MOIvGA) was proposed. Firstly, by improving the interval-value similarity degree, the proposed algorithm can handle both single-value equivalence relation judgment and interval-value similarity analysis. Secondly, the convergence of the algorithm was improved by an improved adaptive genetic algorithm. Finally, the simulation experiments show that the proposed algorithm needs 22 fewer iterations to reach the optimal value than the AGAv (Adaptive Genetic Attribute reduction) algorithm. For 1-hour precipitation classification, the average classification accuracy of MOIvGA is 6.3% higher than that of the RIvD (λ-Reduction in Interval-valued Decision table based on dependence) algorithm, and the accuracy of no-rain forecasting is increased by 7.13%; at the same time, the classification accuracy can be significantly improved by the attribute subset obtained by the MOIvGA algorithm. Therefore, the MOIvGA algorithm can increase both the convergence rate and the classification accuracy in the analysis of interval-valued meteorological observation data.
Reference | Related Articles | Metrics
Dynamic centrality analysis of vehicle Ad Hoc networks
FENG Huifang, WANG Junxia
Journal of Computer Applications    2017, 37 (2): 445-449.   DOI: 10.11772/j.issn.1001-9081.2017.02.0445
Abstract (581)      PDF (830KB) (445)
A dynamic network topology is one of the important characteristics of Vehicular Ad Hoc NETworks (VANET). Based on the Intelligent Driver Model with Lane Changes (IDM-LC), the VanetMobiSim simulator was used to study the dynamic centrality of VANET topology in detail. A temporal network model of VANET was built, and an evaluation method for dynamic centrality based on an attenuation factor and an information store-and-forward index was established, which can both describe the relation between the current network topology and its history and capture the store-and-forward mechanism of information transmission in VANET. Finally, the dynamic centrality of VANET was analyzed through simulation experiments. The results show that although the dynamic centrality of the VANET topology varies with time and parameters, the ranking of important nodes remains relatively stable. This conclusion helps identify the relay nodes of information transmission to achieve successful delivery, and also provides guidance for the invulnerability of the VANET topology.
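The attenuation-factor idea can be sketched as a degree-based temporal centrality in which older snapshots count less. This is a simplified stand-in: the exponential decay form and the use of degree as the per-snapshot score are assumptions, and the paper additionally folds in a store-and-forward index.

```python
def dynamic_centrality(snapshots, decay=0.8):
    """Degree-based dynamic centrality over a temporal network:
    the latest snapshot counts in full, older ones are attenuated
    by decay**age, linking current topology to its history."""
    nodes = {u for edges in snapshots for e in edges for u in e}
    latest = len(snapshots) - 1
    score = {v: 0.0 for v in nodes}
    for t, edges in enumerate(snapshots):
        w = decay ** (latest - t)
        for u, v in edges:
            score[u] += w
            score[v] += w
    return score

snapshots = [[(0, 1), (1, 2)],  # t = 0
             [(1, 2)]]          # t = 1 (current)
s = dynamic_centrality(snapshots, decay=0.5)
# node 1 stays connected across both snapshots and ranks highest
```

A node that is only briefly well connected scores lower than one that stays connected, which matches the observation that the ranking of important (relay) nodes is relatively stable over time.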
Reference | Related Articles | Metrics
Query optimization for distributed database based on parallel genetic algorithm and max-min ant system
LIN Jiming, BAN Wenjiao, WANG Junyi, TONG Jichao
Journal of Computer Applications    2016, 36 (3): 675-680.   DOI: 10.11772/j.issn.1001-9081.2016.03.675
Abstract (610)      PDF (962KB) (540)
Since a relation and its fragments in a distributed database have multiple copies stored at multiple sites, the time and space complexity of querying increases and the search efficiency for a Query Execution Plan (QEP) decreases. A Parallel Genetic Algorithm and Max-Min Ant System (PGA-MMAS) based on the design principles of a Fragments Site Selector (FSS) was therefore proposed. Firstly, based on the design requirements of a distributed information management system for actual business, the FSS was designed to heuristically select the best copy of each relation, decreasing the query join cost and the search space of PGA-MMAS. Secondly, the Genetic Algorithm (GA) encoded the final join relations and performed parallel genetic operations, exploiting GA's fast convergence to obtain a set of relatively optimal QEPs. These QEPs were then transformed into the initial pheromone distribution of the Max-Min Ant System (MMAS), which obtained the optimal QEP quickly and efficiently. Finally, simulation experiments were conducted with different numbers of relations, and the results show that FSS-based PGA-MMAS searches for the optimal QEP more efficiently than the original GA, Fragments Site Selector-Genetic Algorithm (FSS-GA), Fragments Site Selector-Max-Min Ant System (FSS-MMAS) and Fragments Site Selector-Genetic Algorithm-Max-Min Ant System (FSS-GA-MMAS). In actual engineering applications, the proposed algorithm finds high-quality QEPs and improves the efficiency of multi-join queries in distributed databases.
Reference | Related Articles | Metrics
Optimization design of preventing fault injection attack on distributed embedded systems
WEN Liang, JIANG Wei, PAN Xiong, ZHOU Keran, DONG Qi, WANG Junlong
Journal of Computer Applications    2016, 36 (2): 495-498.   DOI: 10.11772/j.issn.1001-9081.2016.02.0495
Abstract (403)      PDF (613KB) (811)
Security-critical distributed systems face both malicious snooping and fault injection attacks, yet traditional research mainly focuses on preventing malicious snooping while disregarding the threat of fault injection. Concerning this problem, fault detection for message encryption/decryption was considered, with the goals of maximizing the fault coverage and minimizing the heterogeneity of the messages' fault coverage. Firstly, the Advanced Encryption Standard (AES) was used to protect confidentiality. Secondly, five fault detection schemes were proposed, and their fault coverage rates and time overheads were derived and measured respectively. Finally, an efficient heuristic algorithm based on Simulated Annealing (SA) under a real-time constraint was proposed to maximize the fault coverage and minimize the heterogeneity. The experimental results show that the objective function value achieved by the proposed algorithm is at least 18% higher than that of the greedy algorithm, verifying the efficiency and robustness of the proposed algorithm.
Reference | Related Articles | Metrics
Distributed fault detection for wireless sensor network based on cumulative sum control chart
LIU Qiuyue, CHENG Yong, WANG Jun, ZHONG Shuiming, XU Liya
Journal of Computer Applications    2016, 36 (11): 3016-3020.   DOI: 10.11772/j.issn.1001-9081.2016.11.3016
Abstract (650)      PDF (908KB) (434)
Given the stringent resource constraints and distributed nature of wireless sensor networks, fault diagnosis of sensor nodes faces great challenges. To address the high false alarm ratio and considerable on-node computational redundancy of existing diagnosis approaches, a new fault detection mechanism based on the CUmulative SUM control chart (CUSUM) and neighbor coordination was proposed. Firstly, the historical data of a single node were analyzed by CUSUM to improve the sensitivity of fault diagnosis and locate the change point. Then, faulty nodes were detected by judging node status through data exchange between neighboring nodes. The experimental results show that the detection accuracy is over 97.7% and the false alarm ratio is below 2% even when the sensor fault probability in the network reaches 35%. The proposed algorithm thus retains high detection accuracy and a low false alarm ratio under high fault probabilities, and clearly reduces the influence of the sensor fault probability.
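The two stages can be sketched directly: a one-sided CUSUM over a node's history, and a neighbor check that confirms the alarm. The drift/threshold values and the median-based neighbor rule below are illustrative assumptions, not the paper's exact parameters.

```python
def cusum_alarm(series, mean, drift, threshold):
    """One-sided CUSUM: accumulate deviations above mean+drift and
    return the index where the statistic crosses the threshold (-1: none)."""
    g = 0.0
    for t, x in enumerate(series):
        g = max(0.0, g + (x - mean - drift))
        if g > threshold:
            return t  # change point located here
    return -1

def confirmed_fault(reading, neighbor_readings, tol):
    """Neighbor coordination: confirm a local alarm only if the node
    disagrees with its neighbors' median by more than tol."""
    s = sorted(neighbor_readings)
    median = s[len(s) // 2]
    return abs(reading - median) > tol

alarm_at = cusum_alarm([0.1, -0.2, 0.0, 3.0, 3.1, 2.9],
                       mean=0.0, drift=0.5, threshold=4.0)
# the statistic crosses the threshold at index 4, just after the jump at index 3
```

The neighbor check is what keeps the false alarm ratio low: a genuine environmental change shifts the neighbors too, so only a lone deviating node is flagged as faulty.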
Reference | Related Articles | Metrics
Data preprocessing based recovery model in wireless meteorological sensor network
WANG Jun, YANG Yang, CHENG Yong
Journal of Computer Applications    2016, 36 (10): 2647-2652.   DOI: 10.11772/j.issn.1001-9081.2016.10.2647
Abstract (674)      PDF (1082KB) (693)
To solve the problem of excessive communication energy consumption caused by the large number of sensor nodes and highly redundant sensor data in wireless meteorological sensor networks, a Data Preprocessing Model based on Joint Sparsity (DPMJS) was proposed. By combining the meteorological forecast value with every cluster head's value in the Wireless Sensor Network (WSN), DPMJS computed a common component used to preprocess the sensor data. A data collection framework based on distributed compressed sensing was also applied to reduce data transmission and balance energy consumption in the clustered network; data measured at ordinary nodes were recovered at the sink node, radically reducing data communication. A suitable method for sparsifying abnormal data was also designed. In simulations, DPMJS exploits the spatio-temporal correlation efficiently to enhance data sparsity and improves the data recovery rate by 25%; compared with plain compressed sensing, the data recovery rate is improved by 46%; meanwhile, abnormal data can be recovered successfully with a high probability of 96%. Experimental results indicate that the proposed preprocessing model can increase the efficiency of data recovery, significantly reduce the amount of transmission, and prolong the network lifetime.
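The common/innovation decomposition behind the joint-sparsity idea can be sketched as follows. This is a simplified stand-in: the forecast plays the role of the shared common component, only per-node innovations are "transmitted", and the function names are hypothetical (the real model recovers innovations via compressed sensing rather than sending them verbatim).

```python
def encode_innovations(node_readings, forecast):
    """Each node keeps only its deviation from the shared forecast;
    deviations are mostly near zero, hence (approximately) sparse."""
    return {n: [x - f for x, f in zip(xs, forecast)]
            for n, xs in node_readings.items()}

def decode_at_sink(innovations, forecast):
    """The sink restores full readings by adding the forecast back."""
    return {n: [f + d for f, d in zip(forecast, ds)]
            for n, ds in innovations.items()}

forecast = [20.0, 21.0, 22.0]
readings = {'node1': [20.1, 21.0, 22.3], 'node2': [19.9, 21.2, 22.0]}
sent = encode_innovations(readings, forecast)
# most entries of `sent` are tiny (or exactly zero) deviations
restored = decode_at_sink(sent, forecast)
```

Because the common component is known at both ends, only the sparse innovations need to cross the network, which is the source of the transmission savings.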
Reference | Related Articles | Metrics
Cognitive radar waveform design for extended target detection based on signal-to-clutter-and-noise ratio
YAN Dong, ZHANG Zhaoxia, ZHAO Yan, WANG Juanfen, YANG Lingzhen, SHI Junpeng
Journal of Computer Applications    2015, 35 (7): 2105-2108.   DOI: 10.11772/j.issn.1001-9081.2015.07.2105
Abstract521)      PDF (703KB)(571)       Save

Focusing on the issue that the Signal-to-Clutter-and-Noise Ratio (SCNR) of the echo signal is low when cognitive radar detects an extended target, a waveform design method based on SCNR was proposed. Firstly, the relation between the SCNR of the cognitive radar echo signal and the Energy Spectral Density (ESD) of the transmitted signal was obtained by establishing an extended target detection model rather than the conventional point-target model. Secondly, according to the maximum SCNR criterion, the globally optimal solution of the transmitted signal ESD was derived. Finally, to obtain a meaningful time-domain signal, a constant-amplitude waveform was synthesized from the ESD by phase modulation, combined with the Minimum Mean-Square Error (MMSE) criterion and an iterative algorithm, which met the emission requirements of radar. In the simulation, the amplitude of the time-domain synthesized signal is uniform, and its SCNR at the output of the matched filter is 19.133 dB, only 0.005 dB less than the ideal value. The results show that the time-domain waveform meets the constant-amplitude requirement, the SCNR obtained at the receiver output closely approximates the ideal value, and the performance of extended target detection is improved.

Reference | Related Articles | Metrics
Improved artificial bee colony algorithm based on P system for 0-1 knapsack problem
SONG Xiaoxiao, WANG Jun
Journal of Computer Applications    2015, 35 (7): 2088-2092.   DOI: 10.11772/j.issn.1001-9081.2015.07.2088
Abstract561)      PDF (726KB)(515)       Save

Aiming at the defects of existing algorithms in solving the large-scale 0-1 knapsack problem, an Improved Artificial Bee Colony algorithm based on P Systems (IABCPS) was introduced in this paper. IABCPS combined the ideas of Membrane Computing (MC), polar coordinate coding and the One Level Membrane Structure (OLMS), and its design adopted the evolutionary rules of an improved Artificial Bee Colony (ABC) algorithm together with the transformation and communication rules of P systems. The trial-count threshold limit was adjusted to keep the balance between exploitation and exploration. The experimental results show that IABCPS can find the optimal solutions of small-scale knapsack problems. In solving a knapsack problem with 200 items, compared with the Clonal Selection Immune Genetic Algorithm (CSIGA), IABCPS increases the average result by 0.15% and decreases the variance by 97.53%; compared with the ABC algorithm, it increases the average result by 4.15% and decreases the variance by 99.69%, demonstrating good optimization ability and stability. Compared with the Artificial Bee Colony algorithm based on P Systems (ABCPS) in solving large-scale knapsack problems with 300, 500, 700 and 1000 items respectively, IABCPS increases the average results by 1.25%, 3.93%, 6.75% and 11.21%, while the ratio of the variance to the number of experiments remains in single digits, showing strong robustness.
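For readers unfamiliar with the base algorithm, a plain ABC sketch for 0-1 knapsack is shown below; it is not IABCPS itself (no membrane structure, polar coding or P-system rules), but it illustrates the role of the trial-count threshold limit: a food source that fails to improve for more than limit trials is abandoned and a scout draws a fresh random solution. All parameter values are illustrative:

```python
import random

def knapsack_abc(values, weights, capacity, n_bees=20, limit=10, iters=200, seed=1):
    random.seed(seed)
    n = len(values)

    def repair(sol):
        # Greedy repair: drop the worst value/weight item until feasible.
        while sum(w for w, s in zip(weights, sol) if s) > capacity:
            picked = [i for i in range(n) if sol[i]]
            worst = min(picked, key=lambda i: values[i] / weights[i])
            sol[worst] = 0
        return sol

    def fitness(sol):
        return sum(v for v, s in zip(values, sol) if s)

    food = [repair([random.randint(0, 1) for _ in range(n)]) for _ in range(n_bees)]
    trials = [0] * n_bees
    best = max(food, key=fitness)[:]
    for _ in range(iters):
        for b in range(n_bees):
            cand = food[b][:]
            cand[random.randrange(n)] ^= 1      # neighbourhood: flip one bit
            cand = repair(cand)
            if fitness(cand) > fitness(food[b]):
                food[b], trials[b] = cand, 0
            else:
                trials[b] += 1
            if trials[b] > limit:               # scout phase: abandon stagnant source
                food[b] = repair([random.randint(0, 1) for _ in range(n)])
                trials[b] = 0
        cur = max(food, key=fitness)
        if fitness(cur) > fitness(best):
            best = cur[:]
    return best, fitness(best)
```

A small limit makes scouts fire often (more exploration); a large limit lets sources be exploited longer, which is the balance the paper tunes.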

Reference | Related Articles | Metrics
Graph transduction via alternating minimization method based on multi-graph
XIU Yu, WANG Jun, WANG Zhongqun, LIU Sanmin
Journal of Computer Applications    2015, 35 (6): 1611-1616.   DOI: 10.11772/j.issn.1001-9081.2015.06.1611
Abstract751)      PDF (929KB)(438)       Save

The performance of Graph-based Semi-Supervised Learning (GSSL) methods built on a single graph mainly depends on that graph being well structured, while most multi-graph algorithms are difficult to apply when the data has only a single view. Aiming at this issue, a Graph Transduction via Alternating Minimization method based on Multi-Graph (MG-GTAM) was proposed. Firstly, using different graph construction parameters, multiple graphs were constructed from single-view data to represent the relations among data points. Secondly, the most confident unlabeled examples were chosen for pseudo-label assignment by integrating the information of the multiple graphs, and higher weights were imposed on the most relevant graphs through alternating optimization, which optimized the agreement and smoothness of the prediction function over the multiple graphs. Finally, more accurate labels were assigned to all unlabeled examples by combining the predictions of the individual graphs. Compared with the classical algorithms Local and Global Consistency (LGC), Gaussian Fields and Harmonic Functions (GFHF), Graph Transduction via Alternation Minimization (GTAM) and Combined Graph Laplacian (CGL), MG-GTAM achieves lower classification error rates on the COIL20 and NEC Animal datasets. The experimental results show that the proposed method can efficiently represent data point relations with multiple graphs and achieves a lower classification error rate.
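MG-GTAM's alternating optimization is too involved for a short sketch, but the single-graph transduction it builds on can be illustrated in the style of the GFHF baseline: clamp the labeled nodes and let each unlabeled node take the weighted average of its neighbours until the scores settle. The adjacency matrix, labels and sweep count below are illustrative assumptions:

```python
def propagate_labels(adj, labels, iters=200):
    """Harmonic-function style propagation: unlabeled nodes (labels[i] is None)
    are repeatedly set to the weighted mean of their neighbours' scores,
    while labeled nodes stay clamped to their given value."""
    n = len(adj)
    f = [float(l) if l is not None else 0.0 for l in labels]
    for _ in range(iters):
        for i in range(n):
            if labels[i] is None:
                d = sum(adj[i])
                if d:
                    f[i] = sum(adj[i][j] * f[j] for j in range(n)) / d
    return f
```

MG-GTAM runs this kind of transduction jointly over several graphs, reweighting the graphs and the predictions in alternation.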

Reference | Related Articles | Metrics
Research and simulation of radar side-lobe suppression based on Kalman-minimum mean-square error
ZHANG Zhaoxia, WANG Huihui, FU Zheng, YANG Lingzhen, WANG Juanfen, LIU Xianglian
Journal of Computer Applications    2015, 35 (5): 1488-1491.   DOI: 10.11772/j.issn.1001-9081.2015.05.1488
Abstract565)      PDF (608KB)(28470)       Save

Concerning the problem that a weak target might be covered by the range side-lobes of a strong one and that the range side-lobes could only be suppressed to a certain level, an improved Kalman-Minimum Mean-Square Error (K-MMSE) algorithm was proposed in this paper. The algorithm combined the Kalman filter with the Minimum Mean-Square Error (MMSE) criterion, providing an effective method for suppressing the range side-lobes of adaptive pulse compression. In the simulation, the proposed algorithm was compared with the traditional matched filter and improved matched filters such as MMSE in single-target and multiple-target environments; the side-lobe levels, the Peak-SideLobe Ratio (PSLR) and the Integrated SideLobe Ratio (ISLR) of the Point Spread Function (PSF) all decreased obviously in comparison with the two previous methods. The simulation results show that the method suppresses range side-lobes well and detects weak targets reliably under both single-target and multiple-target conditions.
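For context on what is being suppressed: the matched-filter output (PSF) of a phase code has a main peak at lag zero plus range side-lobes at the other lags. A minimal sketch using a Barker-13 code is shown below; it illustrates the side-lobe structure only, not the K-MMSE filter itself:

```python
def matched_filter_psf(code):
    """Aperiodic autocorrelation of a phase code: lag 0 is the main peak,
    the non-zero lags are the range side-lobes that can mask weak targets."""
    n = len(code)
    return [sum(code[i] * code[i + lag] for i in range(n - lag)) for lag in range(n)]

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
psf = matched_filter_psf(barker13)  # peak of 13, side-lobes of magnitude <= 1
```

Even for this best-case code, the side-lobe floor is fixed by the waveform; adaptive mismatched filters such as MMSE and K-MMSE push it lower per received scene.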

Reference | Related Articles | Metrics
Energy consumption optimization of stochastic real-time tasks for dependable embedded system
PAN Xiong, JIANG Wei, WEN Liang, ZHOU Keran, DONG Qi, WANG Junlong
Journal of Computer Applications    2015, 35 (12): 3515-3519.   DOI: 10.11772/j.issn.1001-9081.2015.12.3515
Abstract538)      PDF (864KB)(433)       Save
Taking the Worst Case Execution Time (WCET) as the actual execution time of a task may cause a great waste of system resources. To solve this problem, a method based on a stochastic task probability model was proposed. Firstly, Dynamic Voltage and Frequency Scaling (DVFS) was utilized to reduce energy consumption, while considering the effect of DVFS on system reliability, the specific probability distribution of task execution time, and the task requirement of No-Deadline Violation Probability (NDVP). Then, a new optimization algorithm with polynomial running time was proposed based on dynamic programming, and the execution overhead of the algorithm was reduced by designing state elimination rules. The simulation results show that, compared with the optimal algorithm under the WCET model, the proposed algorithm can reduce system energy consumption by more than 30%. The experimental results indicate that considering the random execution time of tasks can save system resources while ensuring system reliability.
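The NDVP constraint can be illustrated with a toy frequency-selection sketch: given a probability distribution over a task's cycle demand, pick the lowest frequency (hence lowest dynamic energy) whose probability of missing the deadline stays within the allowed bound. This is a hedged simplification, not the paper's dynamic program; the function name and the example distribution are assumptions:

```python
def lowest_safe_freq(cycle_dist, deadline, freqs, max_violation_prob):
    """cycle_dist maps cycle counts to probabilities. Return the lowest
    frequency whose deadline-violation probability is within the NDVP bound;
    lower frequency means lower dynamic energy under DVFS."""
    for f in sorted(freqs):
        p_violate = sum(p for c, p in cycle_dist.items() if c / f > deadline)
        if p_violate <= max_violation_prob:
            return f
    return None  # no available frequency satisfies the bound
```

The WCET model corresponds to demanding zero violation probability for the worst-case cycle count, which forces a higher frequency than the stochastic model usually needs.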
Reference | Related Articles | Metrics
Regional cluster-based lifetime optimization strategy for large-scale wireless sensor networks
WANG Yan, ZHANG Tingting, SONG Zhirun, WANG Junlu, GUO Jingyu
Journal of Computer Applications    2015, 35 (11): 3031-3037.   DOI: 10.11772/j.issn.1001-9081.2015.11.3031
Abstract416)      PDF (1095KB)(709)       Save
In view of the wide monitoring areas and large numbers of sensors in large-scale monitoring systems such as environment monitoring and power grid ice-disaster monitoring, a Regional Cluster-based lifetime optimization Strategy for large-scale wireless sensor networks (RCS) was proposed to save network energy consumption and prolong network lifetime. The strategy first used the AGNES (AGglomerative NESting) algorithm to divide the network into several subareas based on node locations, optimizing the distribution of cluster heads. Secondly, uneven clusters were constructed after the cluster heads were generated, and a time threshold was set to balance node energy consumption. Finally, for inter-cluster communication, multi-hop routing was adopted by constructing a minimum spanning tree based on the calculated network energy cost, balancing the energy consumption of the cluster heads. In the simulation, compared with the LEACH (Low Energy Adaptive Clustering Hierarchy) and EEUC (Energy-Efficient Uneven Clustering) algorithms, RCS reduced the energy consumption of cluster head nodes by 45.1% and 2.4% on average respectively, and extended the network lifetime by 38% and 3.7% respectively. The simulation results show that RCS balances the overall network energy consumption more efficiently and significantly prolongs the network lifetime.
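The first step, partitioning nodes by location, uses AGNES, a classic bottom-up agglomerative clusterer: start with every node as its own cluster and repeatedly merge the two closest clusters. A minimal single-linkage sketch (the linkage choice and 2-D coordinates are illustrative assumptions):

```python
def agnes(points, k):
    """Single-linkage AGNES sketch: start from singletons and merge the two
    closest clusters until only k subareas remain."""
    clusters = [[p] for p in points]

    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    def link(c1, c2):
        # Single linkage: distance between the closest pair across clusters.
        return min(dist2(p, q) for p in c1 for q in c2)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters
```

In RCS each resulting subarea then elects its own cluster heads, which is what evens out the cluster-head distribution.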
Reference | Related Articles | Metrics
Intrusion detection model based on decision tree and Naive-Bayes classification
YAO Wei, WANG Juan, ZHANG Shengli
Journal of Computer Applications    2015, 35 (10): 2883-2885.   DOI: 10.11772/j.issn.1001-9081.2015.10.2883
Abstract417)      PDF (465KB)(557)       Save
Intrusion detection requires the system to identify network intrusions quickly and accurately, which demands a highly efficient detection algorithm. In order to improve the efficiency and accuracy of intrusion detection systems and reduce the rates of false positives and false negatives, an H-C4.5-NB intrusion detection model combining C4.5 with Naive Bayes (NB) was proposed after fully analyzing the two algorithms. In this model, the distribution over decision categories was described in the form of probabilities, and the final decision was given as a probability-weighted sum of the C4.5 and NB outputs. Finally, the performance of the model was tested on the KDD 99 dataset. The experimental results show that, compared with traditional methods such as C4.5, NB and NBTree, the detection accuracy of H-C4.5-NB is improved by about 9% for Denial of Service (DoS) attacks and by about 20%-30% for U2R and R2L attacks.
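The fusion rule itself is simple: each base classifier outputs a class-probability vector and the model decides from their weighted sum. A sketch of that rule (the weight alpha and the class names are illustrative; the paper derives its own weighting):

```python
def fuse(p_tree, p_nb, alpha=0.6):
    """Probability-weighted sum of the C4.5 and NB class-probability vectors."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(p_tree, p_nb)]

def predict(p_tree, p_nb, classes, alpha=0.6):
    """Final decision: the class with the largest fused probability."""
    fused = fuse(p_tree, p_nb, alpha)
    return classes[max(range(len(fused)), key=fused.__getitem__)]
```

With alpha below 0.5 the NB vote dominates, so a confident NB score can overturn the tree's prediction, which is how the hybrid recovers classes one base learner handles poorly.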
Reference | Related Articles | Metrics
Vehicle navigation method based on trinocular vision
WANG Jun, LIU Hongyan
Journal of Computer Applications    2014, 34 (6): 1762-1764.   DOI: 10.11772/j.issn.1001-9081.2014.06.1762
Abstract163)      PDF (607KB)(537)       Save

A classification method based on trinocular stereovision, consisting of a geometrical classifier and a color classifier, was proposed to autonomously guide vehicles on unstructured terrain. In this method, the stereovision system captured rich 3D data, including range and color information of the surrounding environment. The geometrical classifier was then used to detect the broad class of ground from the collected data, and the color classifier was adopted to label ground subclasses with different colors. During the classification stage, the classification data were continuously updated to make the vehicle adapt to the changing environment. Using this method, the two broad categories of terrain, traversable and non-traversable for the vehicle, were marked with different colors. The experimental results show that the method can accurately classify the terrain captured by the trinocular stereovision system.
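At its core, the geometrical classifier can be caricatured as a ground-plane height test on the reconstructed 3D points; the threshold, labels and flat-ground assumption below are illustrative only, as the paper's classifier is considerably richer:

```python
def label_points(points3d, ground_z=0.0, height_tol=0.15):
    """Geometrical classifier sketch: 3D points close to the estimated ground
    plane are traversable; points well above it are obstacles."""
    return ["traversable" if abs(z - ground_z) <= height_tol else "obstacle"
            for (_, _, z) in points3d]
```

The color classifier then subdivides the points already labeled as ground, and both classifiers are retrained online as new frames arrive.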

Reference | Related Articles | Metrics
P2P streaming media server cluster deployment algorithm based on cloud computing
MO Zhichao, ZHANG Weizhan, WANG Jun, ZHEN Yan
Journal of Computer Applications    2014, 34 (2): 365-368.  
Abstract565)      PDF (562KB)(612)       Save
Concerning the high bandwidth occupation caused by deploying a P2P (Peer-to-Peer) streaming media server cluster on the Data Center Network (DCN) in the cloud, a P2P streaming media server cluster deployment algorithm based on cloud computing was proposed. The algorithm modeled the cluster deployment as a quadratic assignment problem and sought the mapping between each virtual streaming media server and each deployment point to realize the deployment in the cloud. The simulation experiments demonstrate that the proposed algorithm can effectively reduce the bandwidth usage of the DCN in the cloud.
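A quadratic assignment formulation pairs inter-server traffic with inter-point distance: the cost of a placement is the sum of traffic times distance over all server pairs. A brute-force sketch for a tiny instance (the matrices are illustrative; since QAP is NP-hard, exhaustive search only works at toy sizes and the paper necessarily uses a heuristic instead):

```python
from itertools import permutations

def best_deployment(traffic, dist):
    """Exhaustive QAP solve for a tiny instance: map virtual server i to
    deployment point perm[i], minimising total traffic-weighted distance."""
    n = len(traffic)

    def cost(perm):
        return sum(traffic[i][j] * dist[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))

    return min(permutations(range(n)), key=cost)
```

Intuitively, server pairs that exchange heavy streaming traffic should land on deployment points that are close in the DCN topology, which is exactly what minimising this cost enforces.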
Related Articles | Metrics